Data Sparse Matrix Computations - Lecture 3 (Scribe Notes)

Author

  • John Ryan
Abstract

Convolution as a Matrix/Vector Multiplication

Notice that (2) can be written as $g = Yx$, where $g$ is a column vector with elements $g_k = (x * y)_k$, $x$ is a column vector with elements $x_k$, and $Y$ is an $N \times N$ matrix. By examining (2), we can deduce that the elements of the first row of the matrix $Y$ should be $Y_{0,:} = \{y_0, y_{-1}, y_{-2}, \dots, y_{-(N-1)}\}$. Similarly, the second row should be $Y_{1,:} = \{y_1, y_0, y_{-1}, \dots, y_{-(N-2)}\}$. Now we may take advantage of the periodicity of the signal $y$: namely, $y_{-1} = y_{N-1}$, $y_{-2} = y_{N-2}$, and so on (see equation (1)). With this in mind, the rows of the matrix $Y$ can now be written as ...
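
As a concrete illustration (not part of the original notes), the following NumPy sketch builds a matrix $Y$ with entries $Y_{k,j} = y_{(k-j) \bmod N}$, which is the construction described above once the periodicity identity is applied, and checks that $Yx$ reproduces the circular convolution sum. The function name circulant_from_y and the test sizes are illustrative choices.

```python
import numpy as np

def circulant_from_y(y):
    """Build the N x N matrix Y with entries Y[k, j] = y_{k-j},
    folding negative indices via the periodicity y_{-m} = y_{N-m}."""
    N = len(y)
    Y = np.empty((N, N))
    for k in range(N):
        for j in range(N):
            Y[k, j] = y[(k - j) % N]
    return Y

# Check that g = Y x matches the circular convolution sum
# g_k = sum_j x_j * y_{(k - j) mod N}.
rng = np.random.default_rng(0)
N = 8
x = rng.standard_normal(N)
y = rng.standard_normal(N)
Y = circulant_from_y(y)
g_direct = np.array([sum(x[j] * y[(k - j) % N] for j in range(N))
                     for k in range(N)])
assert np.allclose(Y @ x, g_direct)
```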

Similar resources

Data-sparse matrix computations Lecture 25: Low Rank + Sparse Matrix Recovery

In the previous lecture, we observed that it is possible to recover a sparse solution to Ax = b by solving a minimization problem involving the 1-norm. In this lecture, we consider a matrix A that can be written as A = L + S, where L is a low-rank matrix and S is a sparse matrix, and seek a method that recovers L and S. We remark that Lecture 26 forms a sequel to these notes and addresses the te...
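
As a hedged illustration of the L + S splitting described above (not taken from the lecture itself), here is a NumPy sketch of principal component pursuit solved with a basic ADMM loop. The helper names (shrink, svt, rpca), the step size, and the iteration count are assumptions chosen for readability, not the lecture's prescribed algorithm.

```python
import numpy as np

def shrink(X, tau):
    """Entrywise soft-thresholding, the proximal operator of the l1 norm."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def svt(X, tau):
    """Singular value thresholding, the proximal operator of the nuclear norm."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * shrink(s, tau)) @ Vt

def rpca(A, lam=None, mu=None, n_iter=200):
    """Split A into low-rank L and sparse S by principal component pursuit,
    min ||L||_* + lam*||S||_1 subject to L + S = A, via a basic ADMM loop."""
    m, n = A.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
    mu = mu if mu is not None else 0.25 * m * n / np.abs(A).sum()
    L, S, Y = np.zeros_like(A), np.zeros_like(A), np.zeros_like(A)
    for _ in range(n_iter):
        L = svt(A - S + Y / mu, 1.0 / mu)        # low-rank update
        S = shrink(A - L + Y / mu, lam / mu)     # sparse update
        Y = Y + mu * (A - L - S)                 # dual ascent on the constraint
    return L, S

# Toy check: a rank-2 matrix corrupted by a few large sparse spikes.
rng = np.random.default_rng(1)
L0 = rng.standard_normal((50, 2)) @ rng.standard_normal((2, 50))
S0 = np.zeros((50, 50))
S0.flat[rng.choice(2500, size=50, replace=False)] = 10 * rng.standard_normal(50)
L_hat, S_hat = rpca(L0 + S0)
print(np.linalg.norm(L_hat - L0) / np.linalg.norm(L0))  # relative error, should be small
```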

Data Sparse Matrix Computations - Lecture 4

[Figure: a source domain ΩS and a target domain ΩT that are "well-separated", with source and target points marked.]

Specifically, let's say we have domains ΩS of N source points and ΩT of M target points, and these domains are "well-separated" (we will formalize this in section 3). Our goal is to compute the influence of all source points onto target points. Let the M × N matrix $[K]_{ij} = K(x_i, y_j)$ and assume it is approximately low-rank, so that $K \approx UV^T$ with U of si...
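
To make the "well-separated implies low rank" observation concrete, here is a small NumPy sketch; the 1D geometry, the kernel 1/|x − y|, and the truncation tolerance are illustrative assumptions, not taken from the notes. It shows that the kernel matrix between separated clusters has small numerical rank, and that applying the factored form costs O((M + N)k) work instead of O(MN).

```python
import numpy as np

# Sources in [0, 1], targets in [3, 4]: the two clusters are well separated.
rng = np.random.default_rng(2)
sources = rng.uniform(0.0, 1.0, 200)        # N source points y_j
targets = rng.uniform(3.0, 4.0, 150)        # M target points x_i
K = 1.0 / np.abs(targets[:, None] - sources[None, :])   # M x N kernel matrix

# A truncated SVD reveals the small numerical rank: K ≈ U V^T.
U_full, s, Vt = np.linalg.svd(K, full_matrices=False)
k = int(np.sum(s > 1e-10 * s[0]))           # numerical rank at relative tol 1e-10
U = U_full[:, :k] * s[:k]                   # absorb singular values into U
V = Vt[:k, :].T
print("numerical rank:", k)                 # small compared with min(M, N)

# Fast application: q -> U @ (V.T @ q) costs O((M + N) k) versus O(M N) for K @ q.
q = rng.standard_normal(sources.size)
assert np.allclose(K @ q, U @ (V.T @ q), atol=1e-8 * np.linalg.norm(K @ q))
```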

HPF-2 Support for Dynamic Sparse Computations

There is a class of sparse matrix computations, such as direct solvers of systems of linear equations, that change the fill-in (nonzero entries) of the coefficient matrix, and involve row and column operations (pivoting). This paper addresses the problem of the parallelization of these sparse computations from the point of view of the parallel language and the compiler. Dynamic data structures ...

CS 49: Data Stream Algorithms Lecture Notes, Fall 2011, Amit

Acknowledgements These lecture notes began as rough scribe notes for a Fall 2009 offering of the course "Data Stream Algorithms" at Dartmouth College. The initial scribe notes were prepared mostly by students enrolled in the course in 2009. Subsequently, during a Fall 2011 offering of the course, I edited the notes heavily, bringing them into presentable form, with the aim being to create a r...

Journal title:

Volume   Issue

Pages   -

Publication date: 2017